Image Classification using tf.keras

In this Colab you will classify images of flowers. You will build an image classifier using a tf.keras.Sequential model and load the data using tf.keras.preprocessing.image.ImageDataGenerator.

Importing Packages

Let's start by importing the required packages. The os package is used to read files and the directory structure, numpy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and matplotlib.pyplot is used to plot graphs and display images from our training and validation data. We also import glob to find the image files and shutil to move them into the new directory structure.

In [1]:
import os
import numpy as np
import glob
import shutil
import matplotlib.pyplot as plt

TODO: Import TensorFlow and Keras Layers

In the cell below, import TensorFlow as tf and the Keras layers and models you will use to build your CNN. Also, import ImageDataGenerator from Keras so that you can perform image augmentation.

In [2]:
import tensorflow as tf
In [3]:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator

Data Loading

In order to build our image classifier, we begin by downloading the flowers dataset. We use tf.keras.utils.get_file to download the archived version of the dataset and, by passing extract=True, to extract its contents (by default, get_file caches the download under ~/.keras/datasets).

In [4]:
_URL = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"

zip_file = tf.keras.utils.get_file(origin=_URL,
                                   fname="flower_photos.tgz",
                                   extract=True)

base_dir = os.path.join(os.path.dirname(zip_file), 'flower_photos')

The dataset we downloaded contains images of 5 types of flowers:

  1. Rose
  2. Daisy
  3. Dandelion
  4. Sunflowers
  5. Tulips

So, let's create the labels for these 5 classes:

In [5]:
classes = ['roses', 'daisy', 'dandelion', 'sunflowers', 'tulips']

Also, the dataset we have downloaded has the following directory structure.

flower_photos
|__ daisy
|__ dandelion
|__ roses
|__ sunflowers
|__ tulips

As you can see, there are no separate folders for training and validation data. Therefore, we will have to create our own training and validation sets. Let's write some code that will do this.

The code below creates a train and a val folder, each containing 5 folders (one for each type of flower). It then moves the images from the original folders into these new folders such that 80% of the images go to the training set and 20% go to the validation set. In the end our directory will have the following structure:

flower_photos
|__ daisy
|__ dandelion
|__ roses
|__ sunflowers
|__ tulips
|__ train
    |______ daisy: [1.jpg, 2.jpg, 3.jpg ....]
    |______ dandelion: [1.jpg, 2.jpg, 3.jpg ....]
    |______ roses: [1.jpg, 2.jpg, 3.jpg ....]
    |______ sunflowers: [1.jpg, 2.jpg, 3.jpg ....]
    |______ tulips: [1.jpg, 2.jpg, 3.jpg ....]
|__ val
    |______ daisy: [507.jpg, 508.jpg, 509.jpg ....]
    |______ dandelion: [719.jpg, 720.jpg, 721.jpg ....]
    |______ roses: [514.jpg, 515.jpg, 516.jpg ....]
    |______ sunflowers: [560.jpg, 561.jpg, 562.jpg .....]
    |______ tulips: [640.jpg, 641.jpg, 642.jpg ....]

Since we don't delete the original folders, they will still be in our flower_photos directory, but they will be empty. The code below also prints the total number of flower images we have for each type of flower.

In [6]:
for cl in classes:
  img_path = os.path.join(base_dir, cl)
  images = glob.glob(img_path + '/*.jpg')
  print("{}: {} Images".format(cl, len(images)))
  num_train = int(round(len(images)*0.8))
  train, val = images[:num_train], images[num_train:]

  for t in train:
    if not os.path.exists(os.path.join(base_dir, 'train', cl)):
      os.makedirs(os.path.join(base_dir, 'train', cl))
    shutil.move(t, os.path.join(base_dir, 'train', cl))

  for v in val:
    if not os.path.exists(os.path.join(base_dir, 'val', cl)):
      os.makedirs(os.path.join(base_dir, 'val', cl))
    shutil.move(v, os.path.join(base_dir, 'val', cl))
In [7]:
round(len(images)*0.8)
Out[7]:
513

For convenience, let us set up the paths for the training and validation sets.

In [8]:
train_dir = os.path.join(base_dir, 'train')
val_dir = os.path.join(base_dir, 'val')

Data Augmentation

Overfitting generally occurs when we have a small number of training examples. One way to fix this problem is to augment our dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples with a number of random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This helps expose the model to more aspects of the data and generalize better.

In tf.keras we can implement this using the same ImageDataGenerator class we used before. We simply pass the transformations we want as arguments, and the generator takes care of applying them to the images during training.
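
For example, here is a minimal sketch of augmenting a single image held in memory, just to see what the generator does. It assumes a hypothetical NumPy array img of shape (150, 150, 3) with pixel values in 0-255 (not defined in this notebook); .flow expects a batch dimension.

single_image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45, horizontal_flip=True)
img_batch = np.expand_dims(img, axis=0)               # add the batch dimension: (1, 150, 150, 3)
it = single_image_gen.flow(img_batch, batch_size=1)   # yields randomly augmented (and rescaled) copies
augmented = [next(it)[0] for _ in range(5)]           # 5 different augmentations of the same image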

Experiment with Various Image Transformations

In this section you will get some practice with basic image transformations. Before we begin making transformations, let's define our batch_size and our image size. Remember that the inputs to our CNN must all be the same size, so we have to resize the images in our dataset before feeding them to the network.

TODO: Set Batch and Image Size

In the cell below, set a batch_size of 100 images and set IMG_SHAPE such that our training data consists of images with a width of 150 pixels and a height of 150 pixels.

In [9]:
batch_size = 100
IMG_SHAPE = 150 

TODO: Apply Random Horizontal Flip

In the cell below, use ImageDataGenerator to create a transformation that rescales the pixel values by 1/255 (so they fall in the range [0, 1]) and then applies a random horizontal flip. Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, and to shuffle the images.

In [10]:
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)

train_data_gen = image_gen.flow_from_directory(
                                                batch_size=batch_size,
                                                directory=train_dir,
                                                shuffle=True,
                                                target_size=(IMG_SHAPE,IMG_SHAPE)
                                                )
Found 2935 images belonging to 5 classes.

Let's take a single sample image from our training set and plot it 5 times, so that the random augmentation is applied to the same image 5 different times and we can see the augmentation in action.

In [11]:
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 5, figsize=(20,20))
    axes = axes.flatten()
    for img, ax in zip( images_arr, axes):
        ax.imshow(img)
    plt.tight_layout()
    plt.show()


augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

TODO: Apply Random Rotation

In the cell below, use ImageDataGenerator to create a transformation that rescales the pixel values by 1/255 and then applies a random rotation of up to 45 degrees. Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, and to shuffle the images.

In [12]:
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)

train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_SHAPE, IMG_SHAPE))
Found 2935 images belonging to 5 classes.

Let's take a single sample image from our training set and plot it 5 times, so that the random augmentation is applied to the same image 5 different times and we can see the augmentation in action.

In [13]:
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

TODO: Apply Random Zoom

In the cell below, use ImageDataGenerator to create a transformation that rescales the pixel values by 1/255 and then applies a random zoom of up to 50%. Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, and to shuffle the images.

In [14]:
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)

train_data_gen = image_gen.flow_from_directory(
                                                batch_size=batch_size,
                                                directory=train_dir,
                                                shuffle=True,
                                                target_size=(IMG_SHAPE, IMG_SHAPE)
                                                )
Found 2935 images belonging to 5 classes.

Let's take a single sample image from our training set and plot it 5 times, so that the random augmentation is applied to the same image 5 different times and we can see the augmentation in action.

In [15]:
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

TODO: Put It All Together

In the cell below, use ImageDataGenerator to create a transformation that rescales the pixel values by 1/255 and that applies:

  • a random rotation of up to 45 degrees
  • a random zoom of up to 50%
  • a random horizontal flip
  • a width shift of 0.15
  • a height shift of 0.15

Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, to shuffle the images, and to set the class mode to sparse.

In [16]:
image_gen_train = ImageDataGenerator(
                    rescale=1./255,
                    rotation_range=45,
                    width_shift_range=.15,
                    height_shift_range=.15,
                    horizontal_flip=True,
                    zoom_range=0.5
                    )


train_data_gen = image_gen_train.flow_from_directory(
                                                batch_size=batch_size,
                                                directory=train_dir,
                                                shuffle=True,
                                                target_size=(IMG_SHAPE,IMG_SHAPE),
                                                class_mode='sparse'
                                                )
Found 2935 images belonging to 5 classes.

Let's visualize how a single image looks 5 different times when these augmentations are applied to it randomly.

In [17]:
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

TODO: Create a Data Generator for the Validation Set

Generally, we only apply data augmentation to our training examples. So, in the cell below, use ImageDataGenerator to create a transformation that only rescales the pixel values by 1/255. Then use the .flow_from_directory method to apply the above transformation to the images in our validation set. Make sure you indicate the batch size, the path to the directory of the validation images, the target size for the images, and to set the class mode to sparse. Remember that it is not necessary to shuffle the images in the validation set.

In [18]:
image_gen_val = ImageDataGenerator(rescale=1./255)

val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
                                                 directory=val_dir,
                                                 target_size=(IMG_SHAPE, IMG_SHAPE),
                                                 class_mode='sparse')
Found 735 images belonging to 5 classes.

TODO: Create the CNN

In the cell below, create a convolutional neural network that consists of 3 convolutional blocks. Each convolutional block contains a Conv2D layer followed by a max pool layer. The first convolutional block should have 16 filters, the second one should have 32 filters, and the third one should have 64 filters. All convolutional filters should be 3 x 3. All max pool layers should have a pool_size of (2, 2).

After the 3 convolutional blocks you should have a flatten layer followed by a fully connected layer with 512 units. The CNN should output scores (logits) for the 5 classes; class probabilities come from a softmax, which we fold into the loss by compiling with from_logits=True, so the final Dense layer has no activation. All other layers should use a relu activation function. You should also add Dropout layers with a probability of 20%, where appropriate.

In [19]:
model = Sequential()

model.add(Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_SHAPE,IMG_SHAPE, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))

model.add(Dropout(0.2))
model.add(Dense(5))
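
Because the final Dense layer outputs logits, here is a minimal sketch of how you could recover class probabilities after training. It assumes a hypothetical batch of preprocessed images sample_images (e.g. one batch drawn from val_data_gen), which is not defined in this notebook.

# Wrap the trained model with a Softmax layer to turn logits into probabilities
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
probs = probability_model.predict(sample_images)   # shape: (num_images, 5)
predicted_class = np.argmax(probs, axis=-1)        # indices into the `classes` list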

TODO: Compile the Model

In the cell below, compile your model using the ADAM optimizer and the sparse categorical cross entropy loss function (with from_logits=True, since our model outputs logits). We would also like to look at training and validation accuracy on each epoch as we train our network, so make sure you also pass the metrics argument.

In [20]:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

TODO: Train the Model

In the cell below, train your model using the fit_generator function instead of the usual fit function, because we are using the ImageDataGenerator class to generate batches of training and validation data for our model. Train the model for 80 epochs and make sure you use the proper parameters in the fit_generator function. (Note: as the warning in the output below points out, fit_generator is deprecated in recent TensorFlow versions; Model.fit now supports generators directly, as sketched after the training output.)

In [21]:
epochs = 80

history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=int(np.ceil(train_data_gen.n / float(batch_size))),
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=int(np.ceil(val_data_gen.n / float(batch_size)))
)
WARNING:tensorflow:From <ipython-input-21-7fa7f550ff06>:8: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/80
30/30 [==============================] - 21s 688ms/step - loss: 1.5508 - accuracy: 0.3601 - val_loss: 1.2413 - val_accuracy: 0.4816
Epoch 2/80
30/30 [==============================] - 21s 696ms/step - loss: 1.1778 - accuracy: 0.5046 - val_loss: 1.0510 - val_accuracy: 0.5959
Epoch 3/80
30/30 [==============================] - 21s 704ms/step - loss: 1.0386 - accuracy: 0.5901 - val_loss: 1.0529 - val_accuracy: 0.6000
Epoch 4/80
30/30 [==============================] - 21s 698ms/step - loss: 0.9911 - accuracy: 0.6126 - val_loss: 0.9257 - val_accuracy: 0.6381
Epoch 5/80
30/30 [==============================] - 21s 695ms/step - loss: 0.9680 - accuracy: 0.6245 - val_loss: 1.0006 - val_accuracy: 0.6163
Epoch 6/80
30/30 [==============================] - 21s 697ms/step - loss: 0.9255 - accuracy: 0.6283 - val_loss: 0.9342 - val_accuracy: 0.6626
Epoch 7/80
30/30 [==============================] - 21s 699ms/step - loss: 0.8839 - accuracy: 0.6630 - val_loss: 0.8242 - val_accuracy: 0.6898
Epoch 8/80
30/30 [==============================] - 21s 703ms/step - loss: 0.8690 - accuracy: 0.6641 - val_loss: 0.8389 - val_accuracy: 0.6694
Epoch 9/80
30/30 [==============================] - 21s 694ms/step - loss: 0.8051 - accuracy: 0.6882 - val_loss: 0.8099 - val_accuracy: 0.6980
Epoch 10/80
30/30 [==============================] - 21s 693ms/step - loss: 0.8119 - accuracy: 0.6828 - val_loss: 0.7804 - val_accuracy: 0.6939
Epoch 11/80
30/30 [==============================] - 21s 695ms/step - loss: 0.7710 - accuracy: 0.6944 - val_loss: 0.7820 - val_accuracy: 0.6871
Epoch 12/80
30/30 [==============================] - 21s 698ms/step - loss: 0.7683 - accuracy: 0.6964 - val_loss: 0.8060 - val_accuracy: 0.7020
Epoch 13/80
30/30 [==============================] - 21s 694ms/step - loss: 0.7754 - accuracy: 0.7019 - val_loss: 0.7372 - val_accuracy: 0.7184
Epoch 14/80
30/30 [==============================] - 21s 696ms/step - loss: 0.7421 - accuracy: 0.7145 - val_loss: 0.7626 - val_accuracy: 0.7224
Epoch 15/80
30/30 [==============================] - 21s 694ms/step - loss: 0.7279 - accuracy: 0.7179 - val_loss: 0.6942 - val_accuracy: 0.7524
Epoch 16/80
30/30 [==============================] - 21s 697ms/step - loss: 0.7299 - accuracy: 0.7124 - val_loss: 0.7562 - val_accuracy: 0.7265
Epoch 17/80
30/30 [==============================] - 21s 696ms/step - loss: 0.6890 - accuracy: 0.7284 - val_loss: 0.7043 - val_accuracy: 0.7469
Epoch 18/80
30/30 [==============================] - 21s 696ms/step - loss: 0.6885 - accuracy: 0.7308 - val_loss: 0.7341 - val_accuracy: 0.7483
Epoch 19/80
30/30 [==============================] - 21s 697ms/step - loss: 0.6784 - accuracy: 0.7298 - val_loss: 0.7418 - val_accuracy: 0.7524
Epoch 20/80
30/30 [==============================] - 21s 694ms/step - loss: 0.6731 - accuracy: 0.7298 - val_loss: 0.7202 - val_accuracy: 0.7415
Epoch 21/80
30/30 [==============================] - 21s 701ms/step - loss: 0.6573 - accuracy: 0.7475 - val_loss: 0.7601 - val_accuracy: 0.7293
Epoch 22/80
30/30 [==============================] - 21s 704ms/step - loss: 0.6689 - accuracy: 0.7319 - val_loss: 0.6878 - val_accuracy: 0.7497
Epoch 23/80
30/30 [==============================] - 21s 698ms/step - loss: 0.6373 - accuracy: 0.7550 - val_loss: 0.7218 - val_accuracy: 0.7388
Epoch 24/80
30/30 [==============================] - 21s 694ms/step - loss: 0.6413 - accuracy: 0.7441 - val_loss: 0.6597 - val_accuracy: 0.7592
Epoch 25/80
30/30 [==============================] - 21s 695ms/step - loss: 0.6210 - accuracy: 0.7639 - val_loss: 0.6486 - val_accuracy: 0.7673
Epoch 26/80
30/30 [==============================] - 21s 693ms/step - loss: 0.6251 - accuracy: 0.7520 - val_loss: 0.7455 - val_accuracy: 0.7388
Epoch 27/80
30/30 [==============================] - 21s 698ms/step - loss: 0.6289 - accuracy: 0.7468 - val_loss: 0.6476 - val_accuracy: 0.7660
Epoch 28/80
30/30 [==============================] - 21s 694ms/step - loss: 0.6100 - accuracy: 0.7591 - val_loss: 0.7097 - val_accuracy: 0.7483
Epoch 29/80
30/30 [==============================] - 21s 697ms/step - loss: 0.6244 - accuracy: 0.7557 - val_loss: 0.6906 - val_accuracy: 0.7592
Epoch 30/80
30/30 [==============================] - 21s 697ms/step - loss: 0.6150 - accuracy: 0.7687 - val_loss: 0.6959 - val_accuracy: 0.7483
Epoch 31/80
30/30 [==============================] - 21s 698ms/step - loss: 0.5980 - accuracy: 0.7717 - val_loss: 0.6394 - val_accuracy: 0.7633
Epoch 32/80
30/30 [==============================] - 21s 695ms/step - loss: 0.5882 - accuracy: 0.7721 - val_loss: 0.6081 - val_accuracy: 0.7646
Epoch 33/80
30/30 [==============================] - 21s 695ms/step - loss: 0.5808 - accuracy: 0.7772 - val_loss: 0.6897 - val_accuracy: 0.7660
Epoch 34/80
30/30 [==============================] - 21s 699ms/step - loss: 0.5602 - accuracy: 0.7813 - val_loss: 0.6349 - val_accuracy: 0.7741
Epoch 35/80
30/30 [==============================] - 21s 699ms/step - loss: 0.5758 - accuracy: 0.7860 - val_loss: 0.6388 - val_accuracy: 0.7646
Epoch 36/80
30/30 [==============================] - 21s 702ms/step - loss: 0.5296 - accuracy: 0.7915 - val_loss: 0.6256 - val_accuracy: 0.7755
Epoch 37/80
30/30 [==============================] - 21s 697ms/step - loss: 0.5566 - accuracy: 0.7836 - val_loss: 0.6198 - val_accuracy: 0.7592
Epoch 38/80
30/30 [==============================] - 21s 700ms/step - loss: 0.5451 - accuracy: 0.7939 - val_loss: 0.6206 - val_accuracy: 0.7769
Epoch 39/80
30/30 [==============================] - 21s 696ms/step - loss: 0.5325 - accuracy: 0.7997 - val_loss: 0.6810 - val_accuracy: 0.7578
Epoch 40/80
30/30 [==============================] - 21s 696ms/step - loss: 0.5190 - accuracy: 0.7956 - val_loss: 0.6631 - val_accuracy: 0.7701
Epoch 41/80
30/30 [==============================] - 21s 695ms/step - loss: 0.5056 - accuracy: 0.8000 - val_loss: 0.6261 - val_accuracy: 0.7782
Epoch 42/80
30/30 [==============================] - 21s 699ms/step - loss: 0.5220 - accuracy: 0.7949 - val_loss: 0.6848 - val_accuracy: 0.7714
Epoch 43/80
30/30 [==============================] - 21s 694ms/step - loss: 0.5228 - accuracy: 0.7949 - val_loss: 0.5928 - val_accuracy: 0.7891
Epoch 44/80
30/30 [==============================] - 21s 693ms/step - loss: 0.5120 - accuracy: 0.7973 - val_loss: 0.6106 - val_accuracy: 0.8041
Epoch 45/80
30/30 [==============================] - 21s 696ms/step - loss: 0.5161 - accuracy: 0.7973 - val_loss: 0.6114 - val_accuracy: 0.7891
Epoch 46/80
30/30 [==============================] - 21s 693ms/step - loss: 0.5126 - accuracy: 0.8003 - val_loss: 0.5759 - val_accuracy: 0.8068
Epoch 47/80
30/30 [==============================] - 21s 696ms/step - loss: 0.5067 - accuracy: 0.8024 - val_loss: 0.6855 - val_accuracy: 0.7660
Epoch 48/80
30/30 [==============================] - 21s 694ms/step - loss: 0.5120 - accuracy: 0.8058 - val_loss: 0.5927 - val_accuracy: 0.7946
Epoch 49/80
30/30 [==============================] - 21s 700ms/step - loss: 0.5085 - accuracy: 0.7922 - val_loss: 0.7120 - val_accuracy: 0.7646
Epoch 50/80
30/30 [==============================] - 21s 695ms/step - loss: 0.4710 - accuracy: 0.8211 - val_loss: 0.7057 - val_accuracy: 0.7823
Epoch 51/80
30/30 [==============================] - 21s 702ms/step - loss: 0.4684 - accuracy: 0.8191 - val_loss: 0.7254 - val_accuracy: 0.7429
Epoch 52/80
30/30 [==============================] - 21s 693ms/step - loss: 0.4857 - accuracy: 0.8072 - val_loss: 0.6076 - val_accuracy: 0.7891
Epoch 53/80
30/30 [==============================] - 21s 697ms/step - loss: 0.4797 - accuracy: 0.8133 - val_loss: 0.6348 - val_accuracy: 0.7932
Epoch 54/80
30/30 [==============================] - 21s 694ms/step - loss: 0.4466 - accuracy: 0.8279 - val_loss: 0.7456 - val_accuracy: 0.7741
Epoch 55/80
30/30 [==============================] - 21s 691ms/step - loss: 0.4698 - accuracy: 0.8245 - val_loss: 0.6034 - val_accuracy: 0.7973
Epoch 56/80
30/30 [==============================] - 21s 695ms/step - loss: 0.4499 - accuracy: 0.8211 - val_loss: 0.5920 - val_accuracy: 0.7973
Epoch 57/80
30/30 [==============================] - 21s 699ms/step - loss: 0.4572 - accuracy: 0.8266 - val_loss: 0.6593 - val_accuracy: 0.7850
Epoch 58/80
30/30 [==============================] - 21s 698ms/step - loss: 0.4306 - accuracy: 0.8412 - val_loss: 0.6089 - val_accuracy: 0.8027
Epoch 59/80
30/30 [==============================] - 21s 699ms/step - loss: 0.4294 - accuracy: 0.8371 - val_loss: 0.6317 - val_accuracy: 0.8082
Epoch 60/80
30/30 [==============================] - 21s 699ms/step - loss: 0.4467 - accuracy: 0.8330 - val_loss: 0.5688 - val_accuracy: 0.7973
Epoch 61/80
30/30 [==============================] - 21s 696ms/step - loss: 0.4359 - accuracy: 0.8348 - val_loss: 0.6318 - val_accuracy: 0.7973
Epoch 62/80
30/30 [==============================] - 21s 697ms/step - loss: 0.4249 - accuracy: 0.8320 - val_loss: 0.6043 - val_accuracy: 0.8082
Epoch 63/80
30/30 [==============================] - 21s 703ms/step - loss: 0.4029 - accuracy: 0.8429 - val_loss: 0.5968 - val_accuracy: 0.7986
Epoch 64/80
30/30 [==============================] - 21s 702ms/step - loss: 0.4309 - accuracy: 0.8337 - val_loss: 0.6530 - val_accuracy: 0.8027
Epoch 65/80
30/30 [==============================] - 21s 707ms/step - loss: 0.4451 - accuracy: 0.8239 - val_loss: 0.7067 - val_accuracy: 0.7891
Epoch 66/80
30/30 [==============================] - 21s 696ms/step - loss: 0.4167 - accuracy: 0.8474 - val_loss: 0.6279 - val_accuracy: 0.8054
Epoch 67/80
30/30 [==============================] - 21s 694ms/step - loss: 0.4322 - accuracy: 0.8354 - val_loss: 0.5913 - val_accuracy: 0.8054
Epoch 68/80
30/30 [==============================] - 21s 700ms/step - loss: 0.3946 - accuracy: 0.8494 - val_loss: 0.6850 - val_accuracy: 0.7946
Epoch 69/80
30/30 [==============================] - 21s 693ms/step - loss: 0.4234 - accuracy: 0.8313 - val_loss: 0.6332 - val_accuracy: 0.7986
Epoch 70/80
30/30 [==============================] - 21s 696ms/step - loss: 0.3936 - accuracy: 0.8402 - val_loss: 0.5814 - val_accuracy: 0.8218
Epoch 71/80
30/30 [==============================] - 21s 696ms/step - loss: 0.3790 - accuracy: 0.8593 - val_loss: 0.6903 - val_accuracy: 0.8027
Epoch 72/80
30/30 [==============================] - 21s 695ms/step - loss: 0.4063 - accuracy: 0.8504 - val_loss: 0.5970 - val_accuracy: 0.8109
Epoch 73/80
30/30 [==============================] - 21s 695ms/step - loss: 0.3793 - accuracy: 0.8528 - val_loss: 0.6631 - val_accuracy: 0.8054
Epoch 74/80
30/30 [==============================] - 21s 694ms/step - loss: 0.4259 - accuracy: 0.8409 - val_loss: 0.5824 - val_accuracy: 0.8218
Epoch 75/80
30/30 [==============================] - 21s 698ms/step - loss: 0.3767 - accuracy: 0.8586 - val_loss: 0.6491 - val_accuracy: 0.8041
Epoch 76/80
30/30 [==============================] - 21s 694ms/step - loss: 0.3754 - accuracy: 0.8545 - val_loss: 0.6189 - val_accuracy: 0.8027
Epoch 77/80
30/30 [==============================] - 21s 698ms/step - loss: 0.3793 - accuracy: 0.8542 - val_loss: 0.6483 - val_accuracy: 0.8136
Epoch 78/80
30/30 [==============================] - 21s 694ms/step - loss: 0.3945 - accuracy: 0.8501 - val_loss: 0.6002 - val_accuracy: 0.8041
Epoch 79/80
30/30 [==============================] - 21s 705ms/step - loss: 0.3661 - accuracy: 0.8589 - val_loss: 0.6369 - val_accuracy: 0.8000
Epoch 80/80
30/30 [==============================] - 21s 712ms/step - loss: 0.3671 - accuracy: 0.8634 - val_loss: 0.6105 - val_accuracy: 0.8014
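
As noted in the deprecation warning above, on newer TensorFlow versions the same training run can be expressed with Model.fit, which accepts generators directly. A minimal equivalent sketch, using the same variables as above:

history = model.fit(
    train_data_gen,
    steps_per_epoch=int(np.ceil(train_data_gen.n / float(batch_size))),
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=int(np.ceil(val_data_gen.n / float(batch_size)))
)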

TODO: Plot Training and Validation Graphs.

In the cell below, plot the training and validation accuracy/loss graphs.

In [22]:
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
In [ ]: